19 research outputs found

    Database NewSQL performance evaluation for big data in the public cloud

    For many years, relational databases have been the leading model for data storage, retrieval and management. However, due to increasing needs for scalability and performance, alternative systems have emerged, namely NewSQL technology. NewSQL is a class of modern relational database management systems (RDBMS) that provide the scalable performance of NoSQL systems for online transaction processing (OLTP) read-write workloads while still maintaining the ACID guarantees of a traditional database system. In this research paper, the performance of a NewSQL database is evaluated and compared to that of a MySQL database, both running in the cloud, in order to measure the response time under different workload configurations.
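
    The comparison described above comes down to timing individual operations under different read/write mixes. As a rough, hypothetical illustration (not the paper's actual benchmark), the sketch below measures per-operation response times over any Python DB-API connection, for example one opened with mysql-connector-python or a NewSQL driver's equivalent; the table name, queries, key range and workload ratio are illustrative assumptions.

```python
# Hypothetical benchmarking sketch: measure per-operation response time for a
# configurable read/write workload mix over a PEP 249 (DB-API) connection.
# The table, queries, key range and ratios are placeholders, not the paper's setup.
import random
import statistics
import time

def run_workload(conn, n_ops=1000, read_ratio=0.8):
    """Execute a mix of reads and writes; return per-operation latencies in seconds."""
    cur = conn.cursor()
    latencies = []
    for i in range(n_ops):
        start = time.perf_counter()
        if random.random() < read_ratio:
            # read path (driver assumed to use the %s paramstyle, e.g. mysql-connector)
            cur.execute("SELECT value FROM kv WHERE id = %s", (random.randint(1, 10_000),))
            cur.fetchall()
        else:
            # write path
            cur.execute("UPDATE kv SET value = %s WHERE id = %s",
                        (str(i), random.randint(1, 10_000)))
            conn.commit()
        latencies.append(time.perf_counter() - start)
    return latencies

def summarize(latencies):
    """Mean and 95th-percentile response time in milliseconds."""
    return {
        "mean_ms": 1000 * statistics.mean(latencies),
        "p95_ms": 1000 * statistics.quantiles(latencies, n=20)[18],
    }
```

    Varying read_ratio and n_ops, or running several such clients in parallel, would correspond to the different workload configurations mentioned in the abstract.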

    Equivalence of three-dimensional spacetimes

    A solution to the equivalence problem in three-dimensional gravity is given, and a practically useful method to obtain a coordinate-invariant description of local geometry is presented. The method is a nontrivial adaptation of Karlhede's invariant classification of spacetimes in general relativity. The local geometry is completely determined by the curvature tensor and a finite number of its covariant derivatives in a frame where the components of the metric are constants. The results are presented in the framework of real two-component spinors in three-dimensional spacetimes, where the algebraic classifications of the Ricci and Cotton-York spinors are given and their isotropy groups and canonical forms are determined. As an application, Goedel-type spacetimes in three-dimensional general relativity are discussed: the conditions for local space and time homogeneity are derived, the equivalence of three-dimensional Goedel-type spacetimes is studied, and the results are compared with previous work on four-dimensional Goedel-type spacetimes.
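
    As background for this abstract (standard facts about three-dimensional geometry, not results of the paper): in three dimensions the Weyl tensor vanishes identically, so the Riemann tensor is determined algebraically by the Ricci tensor, and conformal flatness is instead governed by the Cotton-York tensor. In one common convention (signs and normalisations vary between authors),

\[
R_{abcd} = g_{ac}R_{bd} + g_{bd}R_{ac} - g_{ad}R_{bc} - g_{bc}R_{ad}
 - \tfrac{R}{2}\left(g_{ac}g_{bd} - g_{ad}g_{bc}\right),
\qquad
C^{ab} = \varepsilon^{acd}\,\nabla_{c}\!\left(R^{b}{}_{d} - \tfrac{1}{4}\,\delta^{b}{}_{d}\,R\right).
\]

    The vanishing of the (symmetric, trace-free) Cotton-York tensor characterises conformally flat three-dimensional metrics, which is why the Ricci and Cotton-York spinors carry the algebraic data used in the classification.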

    Why High-Performance Modelling and Simulation for Big Data Applications Matters

    Modelling and Simulation (M&S) offer adequate abstractions to manage the complexity of analysing big data in scientific and engineering domains. Unfortunately, big data problems are often not easily amenable to efficient and effective use of High Performance Computing (HPC) facilities and technologies. Furthermore, M&S communities typically lack the detailed expertise required to exploit the full potential of HPC solutions, while HPC specialists may not be fully aware of specific modelling and simulation requirements and applications. The COST Action IC1406 High-Performance Modelling and Simulation for Big Data Applications has created a strategic framework to foster interaction between M&S experts from various application domains on the one hand and HPC experts on the other, in order to develop effective solutions for big data applications. One of the tangible outcomes of the COST Action is a collection of case studies from various computing domains. Each case study brought together HPC and M&S experts, demonstrating the effective cross-pollination facilitated by the COST Action. In this introductory article we argue why joining forces between the M&S and HPC communities is both timely in the big data era and crucial for success in many application domains. Moreover, we provide an overview of the state of the art in the various research areas concerned.

    Analysis and processing aspects of data in big data applications

    No full text

    Edge localisation in MRI for images with signal‐dependent noise

    No full text

    Evaluation model for Big Data integration tools

    No full text
    Given the growing demand and need by enterprises for data and information to positively support the decision-making process, there is no doubt about the importance of selecting the correct and appropriate integration tool for different types of business. For this reason, the essential objective of this study is to create a model that serves as a basis for evaluating the different alternatives and solutions on the market that are able to overcome the challenges of big data integration. The evaluation process of a data integration product begins with the definition and prioritisation of critical requirements and criteria. In this evaluation model, the characteristics evaluated are categorised into three main groups: ease of integration and implementation, quality of service and support, and costs. After identifying the essential criteria and characteristics, it is necessary to determine the weights these criteria should have in the evaluation. It is then necessary to verify which solutions on the market best fit the needs of the business and can satisfy them most effectively, and lastly to compare those solutions using this framework. It is essential to carry out a weighted evaluation based on well-defined criteria such as ease of use, quality of technical support, and data privacy and security. This process is fundamental to verify whether the solution offers what the organisation needs and whether it meets the business requirements and integration needs. This work has been supported by FCT – Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/201
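
    The weighting-and-scoring step described above can be made concrete with a small sketch. The example below is a hypothetical illustration only: the three criterion groups are taken from the abstract, but the weights, candidate tools and scores are made-up placeholders, not values from the study.

```python
# Hypothetical weighted-scoring sketch of the evaluation model described above.
# Criterion groups follow the abstract; weights, tools and scores are placeholders.

WEIGHTS = {
    "ease_of_integration_and_implementation": 0.40,
    "quality_of_service_and_support": 0.35,
    "costs": 0.25,
}

# Scores on a 1-5 scale per criterion group for each candidate tool (made-up numbers).
CANDIDATES = {
    "Tool A": {"ease_of_integration_and_implementation": 4,
               "quality_of_service_and_support": 3,
               "costs": 5},
    "Tool B": {"ease_of_integration_and_implementation": 5,
               "quality_of_service_and_support": 4,
               "costs": 2},
}

def weighted_score(scores):
    """Weighted sum of criterion scores; assumes the weights sum to 1."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

if __name__ == "__main__":
    # Rank candidates by weighted score, highest first.
    ranking = sorted(CANDIDATES.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
    for name, scores in ranking:
        print(f"{name}: {weighted_score(scores):.2f}")
```

    Ranking the candidates by weighted sum is the comparison step; in practice each top-level group would be broken down into finer criteria such as ease of use, quality of technical support, and data privacy and security.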